Section: New Results

Ontology-Based Query Answering with Existential Rules

Participants : Jean-François Baget, Fabien Garreau, Mélanie König, Michel Leclère, Marie-Laure Mugnier, Swan Rocher, Michaël Thomazo.

Note that for this section, as well as all sections in New Results, participants are given in alphabetical order.

This year we continued to work on the existential rule framework (a.k.a. Tuple-Generating Dependencies or Datalog+/-) in the context of Ontology-Based Query Answering (a.k.a. Ontology-Based Data Access, OBDA); see the 2011-2012 activity reports for details on this framework. Ontology-based query answering consists in querying data while taking into account the inferences enabled by an ontology. Here, the ontology is described by existential rules, a very expressive formalism which generalizes the lightweight description logics used for OBDA (e.g., the tractable fragments of the Semantic Web language OWL 2).

From 2009 to 2011, we mainly investigated decidability and complexity issues. In 2012, we tackled the next step, which consists in developing algorithms with good theoretical properties (they should at least run in the “right” worst-case complexity class) and with good performance in practice. There are two main ways of processing rules, namely forward chaining and backward chaining, also known as “materialization” and “query rewriting”. In forward chaining, rules are applied to enrich the initial data, and query answering can then be solved by evaluating the query against the “saturated” database (as in a classical database system, i.e., forgetting the rules). The backward chaining process can be divided into two steps: first, the initial query is rewritten using the rules into a first-order query (typically a union of conjunctive queries, UCQ); then the rewritten query is evaluated against the initial database (again, as in a classical database system).
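
The forward-chaining (materialization) step can be sketched as a naive saturation procedure: rules are applied to the facts until a fixpoint is reached, inventing fresh labelled nulls for existential variables in rule heads. This is only an illustrative sketch (all predicates are hypothetical), and the round limit is a guard, since forward chaining need not terminate in general:

```python
import itertools

# Facts are atoms: (predicate, arg1, arg2, ...). Terms starting with an
# uppercase letter are variables; "_:n" terms are labelled nulls invented
# for existential variables. All predicate names here are illustrative.
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def match(body, facts, subst=None):
    """Yield every substitution mapping the body atoms into the facts."""
    subst = subst or {}
    if not body:
        yield subst
        return
    first, rest = body[0], body[1:]
    for fact in facts:
        if fact[0] != first[0] or len(fact) != len(first):
            continue
        s = dict(subst)
        ok = True
        for bt, ft in zip(first[1:], fact[1:]):
            if is_var(bt):
                if s.setdefault(bt, ft) != ft:
                    ok = False
                    break
            elif bt != ft:
                ok = False
                break
        if ok:
            yield from match(rest, facts, s)

def saturate(facts, rules, max_rounds=10):
    """Naive saturation: apply rules until fixpoint (or a round limit,
    since forward chaining with existential rules need not terminate)."""
    facts = set(facts)
    fresh = itertools.count()
    for _ in range(max_rounds):
        new = set()
        for body, head in rules:
            for s in match(body, facts):
                s = dict(s)
                for atom in head:
                    for t in atom[1:]:
                        if is_var(t) and t not in s:
                            s[t] = f"_:{next(fresh)}"  # existential -> fresh null
                    new.add(tuple([atom[0]] + [s.get(t, t) for t in atom[1:]]))
        if new <= facts:          # fixpoint: nothing new was derived
            return facts
        facts |= new
    return facts
```

Query answering then amounts to evaluating the query against the saturated fact set, the rules being forgotten.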

In 2013, we focused on the one hand on improving query rewriting algorithms, and on the other hand began to investigate extensions of our framework.

Improvement of Query Rewriting Algorithms

The advantage of the query rewriting approach is that the data are not modified (hence no write access permission is required and the data do not grow; moreover, there is no materialization that would need to be updated when the data change). However, the practicability of this approach is questionable due to (1) the weak expressivity of the classes of rules for which efficient rewriters have been implemented, and (2) the large size of the rewritings produced as UCQs.

With respect to the first point, we improved the algorithm designed in 2012. This algorithm accepts any set of existential rules as input and stops if this set fulfills the so-called finite unification set (fus) property, which means that the set of rules makes it possible to rewrite any query as a first-order query, e.g., a UCQ (this property does not hold in general: a finite rewriting may not exist). We also studied properties of rewriting operators that ensure the correctness and termination of a generic breadth-first rewriting algorithm, and analyzed several operators with respect to these properties.
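
The generic breadth-first rewriting loop can be sketched as follows. This is a strong simplification of the actual algorithm: rule heads are restricted to single atoms, variables are assumed renamed apart, and no redundancy pruning is performed, whereas real rewriting operators rely on piece-unification and remove subsumed queries. All names are illustrative:

```python
# A conjunctive query (CQ) is a frozenset of atoms (predicate, args...);
# variables are uppercase strings. Rules are (body, head) with a
# single-atom head. This is a simplified sketch, not a real rewriter.
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def unify(query_atom, head):
    """Unifier of a query atom with a (renamed-apart) rule head,
    or None. One-step dereferencing only: enough for shallow cases."""
    if query_atom[0] != head[0] or len(query_atom) != len(head):
        return None
    s = {}
    for qt, ht in zip(query_atom[1:], head[1:]):
        ht = s.get(ht, ht)
        if qt == ht:
            continue
        if is_var(ht):
            s[ht] = qt
        elif is_var(qt):
            s[qt] = ht
        else:
            return None              # clash on two distinct constants
    return s

def rewrite(query, rules, max_rounds=10):
    """Breadth-first rewriting: returns a set of CQs (a UCQ). For fus
    rules the fixpoint is reached; the round limit guards other inputs."""
    ucq = {frozenset(query)}
    frontier = set(ucq)
    for _ in range(max_rounds):
        new = set()
        for q in frontier:
            for body, head in rules:
                for atom in q:
                    s = unify(atom, head)
                    if s is None:
                        continue
                    apply_s = lambda a: tuple([a[0]] + [s.get(t, t) for t in a[1:]])
                    # Replace the unified atom by the rule body.
                    new.add(frozenset(apply_s(a) for a in (q - {atom}) | set(body)))
        new -= ucq
        if not new:                  # fixpoint: the rewriting is finite
            break
        ucq |= new
        frontier = new
    return ucq
```

For instance, with the rule Professor(X) -> Teaches(X), the query Teaches(alice) is rewritten into the two-CQ union {Teaches(alice)} ∪ {Professor(alice)}.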

With respect to the second point, we defined semi-conjunctive queries (SCQs), a syntactic extension of conjunctive queries. We designed and implemented an algorithm called Compact, which computes sound and complete rewritings of a conjunctive query in the form of a union of SCQs (USCQs). As in the above work, any set of existential rules can be considered as input; however, the algorithm is guaranteed to stop only for fus rules. First experiments show that USCQs are both very efficiently computable and more efficiently evaluated than their equivalent UCQs.

Ongoing Work: Extensions of the Framework

Inconsistency-tolerant query answering. The data may be inconsistent with the ontology, especially when there are several data sources. The classical logical framework then becomes inappropriate, since an inconsistent logical theory entails everything. Inconsistency-tolerant semantics have therefore been defined to obtain meaningful answers. These semantics are based on the notion of repairs, which are maximal subsets of the data consistent with the ontology. In the most natural semantics, a tuple is an answer to the query if it is an answer in each repair. This issue is relevant to Pagoda and Qualinca, two ANR projects started in 2013 and 2012 respectively (see Section  8.1 ). Swan Rocher's master's thesis was devoted to a query answering algorithm in this framework, where the ontology is described by existential rules and negative constraints.
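
A minimal sketch of this repair-based semantics, under strong simplifying assumptions: negative constraints are represented as ground "forbidden sets" of atoms, queries are single ground atoms checked by membership, and no rule inferences are applied. All predicates are hypothetical:

```python
from itertools import combinations

def consistent(subset, constraints):
    """A set of facts is consistent if it contains no forbidden set.
    Constraints are ground frozensets: a simplified stand-in for
    negative constraints evaluated on saturated data."""
    return not any(c <= subset for c in constraints)

def repairs(facts, constraints):
    """All maximal consistent subsets of the facts (exponential
    enumeration; fine for a toy example)."""
    facts = frozenset(facts)
    cands = [frozenset(c)
             for n in range(len(facts), -1, -1)
             for c in combinations(facts, n)
             if consistent(frozenset(c), constraints)]
    # Keep only the subsets not strictly contained in another candidate.
    return [c for c in cands if not any(c < d for d in cands)]

def ar_answer(atom, facts, constraints):
    """Most natural (repair-intersection) semantics: an atom is an
    answer iff it holds in every repair."""
    return all(atom in r for r in repairs(facts, constraints))
```

For example, if a constraint forbids the same person being both an employee and a student, the two conflicting atoms each survive in only one repair, so neither is an answer, while an unconflicted atom is answered positively.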

Existential rules with non-monotonic negation. Non-monotonic negation is very useful for modeling purposes. We added non-monotonic negation to existential rules, under stable model semantics. This brings our framework close to the logic programs considered in the area of Answer Set Programming. First results were obtained on the semantics and decidability of query answering with these rules. This work is part of the ASPIQ project, started in 2013 (see Section  8.1 ).
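
For intuition, stable model semantics can be illustrated on ground rules by a brute-force check: a candidate set of atoms is a stable model iff it equals the least model of its reduct (the program obtained by deleting rules whose negated atoms are in the candidate, and the remaining negations). This toy program ignores existential variables entirely and is only a hypothetical illustration:

```python
from itertools import combinations

# Ground rules: (head, positive_body, negative_body); atoms are strings.
def least_model(rules):
    """Least model of a negation-free ground program (naive fixpoint)."""
    m = set()
    changed = True
    while changed:
        changed = False
        for head, pos, _ in rules:
            if set(pos) <= m and head not in m:
                m.add(head)
                changed = True
    return m

def stable_models(rules):
    """Enumerate all stable models of a ground program by checking
    every candidate set against the least model of its reduct."""
    atoms = ({r[0] for r in rules}
             | {a for r in rules for a in r[1]}
             | {a for r in rules for a in r[2]})
    models = []
    for n in range(len(atoms) + 1):
        for cand in combinations(sorted(atoms), n):
            m = set(cand)
            # Reduct w.r.t. m: drop rules blocked by m, erase negation.
            reduct = [(h, p, ()) for h, p, neg in rules
                      if not (set(neg) & m)]
            if least_model(reduct) == m:
                models.append(m)
    return models
```

For instance, the classic pair of rules "a :- not b" and "b :- not a" has exactly the two stable models {a} and {b}, reflecting the non-monotonic, multi-model character of the semantics.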

Others

Michaël Thomazo defended his PhD thesis, entitled “Conjunctive Query Answering Under Existential Rules —Decidability, Complexity, and Algorithms” (Oct. 2013). The main contributions of this thesis are the following: first, a unified view of the currently known existential rule classes ensuring decidability of query answering, together with a complexity analysis and a worst-case optimal algorithm for a new generic class, which generalizes a family of very expressive decidable classes (see the gbts class in the 2012 activity report); second, a generic algorithm for query rewriting, which overcomes some causes of combinatorial explosion that make classical approaches inapplicable.

The journal version extending the papers presented at IJCAI 2011 and KR 2012, in collaboration with Sebastian Rudolph (TU Dresden), is almost finished; its completion was postponed by the addition of complementary results.